One issue that arose during GLR's development was the considerable overhead that graphics hardware context switching could introduce. An early version of glrmanager attempted to tile frame buffer rectangles within the physical frame buffer so that they did not overlap, permitting multiple render intervals to be simultaneously active.
The result was a substantial decrease in overall performance under contention from multiple GLR clients, due to the overhead of context switching the graphics hardware state. High-end graphics subsystems tend to be very stateful, so context switching is relatively expensive, and scheduling multiple GLR render intervals concurrently forced frequent context switches. If glrmanager instead schedules only a single GLR render interval at a time, context switching overhead is minimized and overall throughput can be improved.
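The scheduling effect can be sketched with a small simulation. This is an illustration only, not GLR's actual scheduler; the client names and frame counts are hypothetical, and a context switch is simply modeled as occurring whenever the hardware changes which client it serves:

```python
def count_context_switches(schedule):
    """Count hardware context switches: one each time the active client changes."""
    switches = 0
    prev = None
    for client in schedule:
        if client != prev:
            switches += 1
        prev = client
    return switches

def interleaved(clients, frames):
    # Multiple render intervals simultaneously active: clients alternate
    # on the hardware every frame.
    return [c for _ in range(frames) for c in clients]

def serialized(clients, frames):
    # Single render interval scheduled at a time: each client completes
    # all of its frames before the next client is switched in.
    return [c for c in clients for _ in range(frames)]

clients = ["volren-A", "volren-B", "volren-C"]  # hypothetical GLR clients
print(count_context_switches(interleaved(clients, 4)))  # 12
print(count_context_switches(serialized(clients, 4)))   # 3
```

With three clients rendering four frames each, interleaving costs a context switch on every frame, while serializing the render intervals costs one switch per client, which is why the single-interval policy improves throughput on stateful hardware.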
In addition to context switching of basic hardware rendering state, swapping of texture memory can be particularly expensive. Multiple instances of volren (both the original and the GLR-capable version) will oversubscribe a RealityEngine's total texture memory, and the resulting texture memory swapping greatly slows rendering performance.
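A minimal sketch of the oversubscription condition (the memory figures below are illustrative assumptions, not RealityEngine or volren measurements): when the clients' combined texture working sets exceed the hardware's dedicated texture memory, every context switch can force texture reloads.

```python
def oversubscribed(texture_demands_mb, total_texture_mb):
    """True when the clients' combined texture working sets exceed the
    hardware's dedicated texture memory, forcing texture swapping."""
    return sum(texture_demands_mb) > total_texture_mb

# Hypothetical numbers: two volume renderers on a board with 16 MB of
# texture memory. One instance fits; two together must swap textures.
print(oversubscribed([12], 16))      # False
print(oversubscribed([12, 12], 16))  # True
```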
For traditional high-end graphics applications that tend to monopolize the graphics hardware, context switching and texture swapping overheads are not an issue. For systems used as render servers, however, sharing the hardware via GLR is likely to create more of both. Future high-end graphics subsystem designs should place more importance on minimizing context switching overheads.